Stats 300b: Theory of Statistics, Winter 2018. Lecture 15 – February 27

Authors

  • John Duchi
  • Ruiyang Song
Abstract

Theorem 1. Let {X_n}_{n=1}^∞ ⊂ L^∞(T) be a sequence of stochastic processes on T. The following are equivalent:

(1) X_n converges in distribution to a tight stochastic process X ∈ L^∞(T);

(2) both of the following hold:
  (a) finite-dimensional convergence (FIDI): for every k ∈ N and t_1, …, t_k ∈ T, the vector (X_n(t_1), …, X_n(t_k)) converges in distribution as n → ∞;
  (b) the sequence {X_n} is asymptotically stochastically equicontinuous.

Proof. (1) ⇒ (2) is immediate; here we prove only (2) ⇒ (1).

Part I: Consider countable subsets of T. Let m ∈ N, and construct a partition T_1^m, …, T_{k_m}^m of T such that
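For context, condition (b) is standardly formulated as follows. This is a textbook definition supplied for completeness, not taken from the excerpt; in particular, the semimetric d on T is an assumption, since the excerpt does not specify one.

```latex
% Asymptotic stochastic equicontinuity (standard formulation):
% given a semimetric d on T, the sequence {X_n} is asymptotically
% stochastically equicontinuous if for every eps > 0 and eta > 0
% there exists delta > 0 such that
\limsup_{n \to \infty}
  \Pr^{*}\!\Bigl( \sup_{d(s,t) < \delta} \bigl| X_n(s) - X_n(t) \bigr| > \varepsilon \Bigr)
  < \eta ,
% where Pr^* denotes outer probability (used because the supremum
% need not be measurable).
```

Intuitively, (b) rules out limits with discontinuous sample paths: once nearby points of T have uniformly small fluctuations, FIDI convergence on a countable dense subset extends to convergence in L^∞(T).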



Publication year: 2018